SETI@home logo
Developer(s) | University of California, Berkeley |
---|---|
Initial release | May 17, 1999 |
Stable release | SETI@home Enhanced: 6.03 (August 21, 2008); SETI@home Enhanced CUDA for NVIDIA GPU: 6.08 (January 21, 2009); Astropulse v5: 5.06 (July 16, 2009) |
Preview release | SETI@home Enhanced: 6.03 (August 20, 2008); SETI@home Enhanced CUDA for NVIDIA GPU: 6.09 (August 13, 2009); Astropulse v5: 5.06 (June 13, 2009); SETI@home v7: 6.97 (September 8, 2011) |
Operating system | Microsoft Windows, Linux, Mac OS X, Solaris,[1] IBM AIX, FreeBSD, DragonFly BSD, OpenBSD, NetBSD, HP-UX, IRIX, Tru64 UNIX, OS/2 Warp, eComStation[2] |
Platform | Cross-platform |
Available in | English |
Type | Volunteer computing |
License | GPL[3] |
Website | setiathome.ssl.berkeley.edu |
Average performance | 505 TFLOPS [4] |
Active users | 152,324 |
Total users | 1,229,135 |
Active hosts | 228,182 |
Total hosts | 3,025,721 |
SETI@home ("SETI at home") is an Internet-based public volunteer computing project employing the BOINC software platform, hosted by the Space Sciences Laboratory at the University of California, Berkeley, in the United States. SETI is an acronym for the Search for Extra-Terrestrial Intelligence. The project's purpose is to analyze radio signals in search of signs of extraterrestrial intelligence, and it is one of many activities undertaken as part of SETI.
SETI@home was released to the public on May 17, 1999,[5][6][7] making it the second large-scale use of distributed computing over the Internet for research purposes, after Distributed.net launched in 1997. Along with MilkyWay@home and Einstein@home, it is one of three major computing projects of this type whose primary purpose is the investigation of phenomena in interstellar space.
The two original goals of SETI@home were:

* to do useful scientific work by supporting an observational analysis to detect intelligent life outside Earth, and
* to prove the viability and practicality of the volunteer computing concept.
The second of these goals is generally considered to have succeeded completely. The current BOINC environment, a development of the original SETI@home, is providing support for many computationally intensive projects in a wide range of disciplines.
The first of these goals has not been met to date: no evidence of ETI signals has been found via SETI@home. However, the continuing effort is predicated on the assumption that the observational analysis is not an 'ill-posed' problem. The remainder of this article deals specifically with the original SETI@home observations and analysis.
SETI@home searches for possible evidence of radio transmissions from extraterrestrial intelligence using observational data from the Arecibo radio telescope. The data is taken 'piggyback' or 'passively' while the telescope is used for other scientific programs, then digitized, stored, and sent to the SETI@home facility. There it is parsed into small chunks in frequency and time and analyzed, using software, to search for any signals—that is, variations which cannot be ascribed to noise and which contain information. The crux of SETI@home is to have each of the millions of resulting chunks analyzed off-site by home computers, which then report the results back. Thus what appears an onerous problem in data analysis is reduced to a reasonable one with the aid of a large, Internet-based community.
The software searches for five types of signal that distinguish candidate transmissions from noise.[8]
There are many ways an ETI signal may be affected by the interstellar medium and by the relative motion of its origin with respect to Earth. The potential 'signal' is thus processed in a number of ways (although not testing all detection methods or scenarios) to maximize the likelihood of distinguishing it from the scintillating noise already present in all directions of outer space. For instance, another planet is very likely to be moving at some speed and acceleration with respect to Earth, which will shift the frequency of a potential 'signal' over time. The SETI@home software checks for this frequency drift, to an extent, during processing.
The process is somewhat like tuning a radio to various channels, and looking at the signal strength meter. If the strength of the signal goes up, that gets attention. More technically, it involves a lot of digital signal processing, mostly discrete Fourier transforms at various chirp rates and durations.
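The chirped Fourier analysis described above can be sketched in a few lines. The following is an illustrative example only, not the project's actual code: the signal parameters, the trial chirp-rate grid, and the NumPy-based de-chirping are all assumptions chosen for demonstration. A tone drifting in frequency smears across FFT bins, but multiplying by a matched counter-chirp collapses it back into a single sharp spike.

```python
import numpy as np

def dechirp_power_spectrum(samples, sample_rate, chirp_rate):
    """De-drift a complex baseband signal by a trial chirp rate (Hz/s),
    then return its power spectrum. A drifting narrowband tone becomes
    a sharp spike when the trial rate matches the true drift."""
    t = np.arange(len(samples)) / sample_rate
    dechirped = samples * np.exp(-1j * np.pi * chirp_rate * t**2)
    return np.abs(np.fft.fft(dechirped))**2

# Synthetic example: a weak tone at 100 Hz drifting at 0.5 Hz/s,
# buried in complex Gaussian noise.
rng = np.random.default_rng(0)
rate, n = 1024, 8192
t = np.arange(n) / rate
drift = 0.5
signal = 0.2 * np.exp(1j * (2 * np.pi * 100 * t + np.pi * drift * t**2))
data = signal + (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

# Search over trial chirp rates; the matched rate yields the strongest spike.
best = max((dechirp_power_spectrum(data, rate, c).max(), c)
           for c in np.linspace(-2, 2, 41))
print(f"best chirp rate: {best[1]:.2f} Hz/s")
```

Coherent integration over the full chunk is what buys the sensitivity: the tone's power grows with the square of the integration length, while the noise in each bin grows only linearly, but this only works if the drift is compensated first.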
To date, the project has not confirmed the detection of any ETI signals (see extraterrestrial intelligence). However, it has identified several candidate targets (sky positions), where a spike in intensity is not easily explained as noise,[9] for further analysis. The most significant candidate signal to date was announced on September 1, 2004, named Radio source SHGb02+14a.
While the project has not reached the stated primary goal of finding extraterrestrial intelligence, it has proved to the scientific community that distributed computing projects using Internet-connected computers can succeed as a viable analysis tool, and even beat the largest supercomputers.[10] However, it has not been demonstrated that the order of magnitude excess in computers used, many outside the home (the original intent was to use 50,000-100,000 "home" computers),[11] has benefited the project scientifically. (For more on this, see 'threats to project' below.)
Astronomer Seth Shostak stated in 2004 that he expects to get a conclusive signal and proof of alien contact between 2020 and 2025, based on the Drake equation.[12] This implies that a prolonged effort may benefit SETI@home, despite its (present) eleven-year run without success in ETI detection.
Anybody with an at least intermittently Internet-connected computer can participate in SETI@home by running a free program that downloads and analyzes radio telescope data.
Observational data is recorded on 36 GB tapes at the Arecibo Observatory in Puerto Rico, each holding 15.5 hours of observations.[13] Arecibo does not have a high-bandwidth Internet connection, so the tapes must first go to Berkeley by postal mail.[14] Once there, the data is divided in both the time and frequency domains into work units of 107 seconds of data,[15] or approximately 0.35 MB, which overlap in time but not in frequency.[13] These work units are then sent from the SETI@home server over the Internet to personal computers around the world for analysis.
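The time/frequency splitting above can be modeled with a simple generator. This is an illustrative sketch, not the project's pipeline: the 20-second overlap and the 256 frequency sub-bands are assumed values for demonstration, since the article only states that units cover 107 seconds and overlap in time but not in frequency.

```python
def split_work_units(total_seconds, n_bands, chunk_seconds=107, overlap_seconds=20):
    """Yield (band, start, end) descriptors for work units.
    Units within a band overlap in time; bands do not overlap in frequency.
    The overlap length and band count are illustrative assumptions."""
    step = chunk_seconds - overlap_seconds
    for band in range(n_bands):
        start = 0.0
        while start + chunk_seconds <= total_seconds:
            yield (band, start, start + chunk_seconds)
            start += step

# One 15.5-hour tape, split into an assumed 256 frequency sub-bands:
units = list(split_work_units(15.5 * 3600, n_bands=256))
print(len(units))
```

The overlap in time matters for detection: a signal that straddles a chunk boundary would otherwise be split between two work units and weakened in both.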
The analysis software can search for signals with about one-tenth the strength of those sought in previous surveys, because it makes use of a computationally intensive algorithm called coherent integration that no one else has had the computing power to implement.
Data is merged into a database using SETI@home computers in Berkeley. Interference is rejected, and various pattern-detection algorithms are applied to search for the most interesting signals.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
The initial software platform, now referred to as "SETI@home Classic", ran from May 17, 1999 to December 15, 2005. This program was only capable of running SETI@home; it was replaced by Berkeley Open Infrastructure for Network Computing (BOINC), which also allows users to contribute to other distributed computing projects at the same time as running SETI@home. The BOINC platform will also allow testing for more types of signals.
The discontinuation of the SETI@home Classic platform has rendered older Macintosh computers running pre-Mac OS X versions of the Mac OS unsuitable for participating in the project.
SETI@home is available for the Sony PlayStation 3 console.[16]
On May 3, 2006, work units for a new version of SETI@home, called "SETI@home Enhanced", began distribution. Since computers now have the power for more computationally intensive work than when the project began, this version is twice as sensitive to Gaussian signals and to some kinds of pulsed signals as the original SETI@home (BOINC) software. The new application has been optimized to the point where it runs faster on some work units than earlier versions; however, some work units (the best ones, scientifically speaking) take significantly longer.
In addition, some distributions of the SETI@home application have been optimized for a particular type of CPU. These are referred to as "optimized executables" and have been found to run faster on systems with that CPU. As of 2007, most of these applications are optimized for Intel processors (and their corresponding instruction sets).[17]
The results of the data processing are normally automatically transmitted when the computer is next connected to the Internet; it can also be instructed to connect to the Internet as needed.
With over 5.2 million participants worldwide, the project is the distributed computing project with the most participants to date. The original intent of SETI@home was to utilize 50,000-100,000 home computers.[11] Since its launch on May 17, 1999, the project has logged over two million years of aggregate computing time. On September 26, 2001, SETI@home had performed a total of 10²¹ floating-point operations. It is acknowledged by Guinness World Records as the largest computation in history.[18] With over 278,832 active computers in the system (2.4 million total) in 234 countries, as of November 14, 2009, SETI@home had the ability to compute over 769 teraFLOPS.[19] For comparison, the K computer, which as of June 20, 2011 was the world's fastest supercomputer, achieved 8.162 petaFLOPS.
There were plans to collect data from the Parkes Observatory in Australia in order to analyze the southern sky.[20] However, as of March 9, 2009, these plans were not mentioned on the project's website. Other plans include a Multi-Beam Data Recorder, a Near Time Persistency Checker, and Astropulse (an application that uses coherent dedispersion to search for pulsed signals).[21] Astropulse will team with the original SETI@home to detect other sources, such as rapidly rotating pulsars, exploding primordial black holes, or as-yet unknown astrophysical phenomena.[22] Beta testing of the final public release version of Astropulse was completed in July 2008, and the distribution of work units to higher-spec machines capable of processing the more CPU-intensive work units started in mid-July 2008.
SETI@home users quickly started to compete with one another in an effort to process the maximum number of work units. Teams were formed to combine the efforts of individual users. The competition continued, and grew larger with the introduction of BOINC.
As with any competition, attempts have been made to 'cheat' the system and claim credit for work that has not been performed. To combat cheating, the SETI@home system sends every work unit to multiple computers, a value known as "initial replication" (currently 2). Credit for a returned work unit is only granted once a minimum number of results have been returned and the results agree, a value known as "minimum quorum" (currently 2). If, due to computation errors or cheating by submitting false data, not enough results agree, more identical work units are sent out until the minimum quorum can be reached. The final credit granted to all machines which returned the correct result is the same, and is the lowest of the values claimed by each machine. The credit claimed by each machine for an identical work unit often varies due to very minor differences in floating-point arithmetic on different processors.
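The replication-and-quorum scheme can be sketched as a small model. This is an illustrative Python function, not BOINC's actual validator (which, among other things, compares floating-point results with a tolerance rather than exact equality); the function name and input format are assumptions for demonstration.

```python
from collections import defaultdict

def grant_credit(results, min_quorum=2):
    """results: list of (host_id, result_value, claimed_credit) tuples.
    Grant credit only when at least `min_quorum` results agree;
    every agreeing host receives the lowest claimed credit.
    Returns (granted_credit, winning_hosts), or None if no quorum yet
    (in which case the server would issue another copy of the work unit)."""
    groups = defaultdict(list)
    for host, result, claimed in results:
        groups[result].append((host, claimed))
    for entries in groups.values():
        if len(entries) >= min_quorum:
            credit = min(claimed for _, claimed in entries)
            return credit, [host for host, _ in entries]
    return None

# Two hosts agree; a third (faulty or cheating) returns a different result
# and an inflated credit claim, which is simply ignored:
outcome = grant_credit([(1, "abc", 31.7), (2, "abc", 30.9), (3, "zzz", 99.0)])
print(outcome)  # (30.9, [1, 2])
```

Granting everyone the minimum claimed value removes the incentive to inflate claims, since a single honest machine in the quorum caps what any cheater can receive.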
Some users have installed and run SETI@home on computers at their workplaces, an act known as 'Borging', after the assimilation-driven Borg of Star Trek. In some cases, SETI@home users have misused company resources to gain work-unit results, with at least two individuals fired for running SETI@home on an enterprise production system.[23] A thread in the newsgroup alt.sci.seti titled "Anyone fired for SETI screensaver" ran starting as early as September 14, 1999.
Other users collected large quantities of equipment together at home to create "SETI farms", which typically consist of stripped-down computers (only a motherboard, CPU, RAM, and power supply) arranged on shelves as diskless workstations, running either Linux or old versions of Microsoft Windows "headless" (without a monitor).[24]
Like any project of prolonged duration, there are factors that may result in its termination. Some of these are detailed below:
At present, SETI@home procures its data from the Arecibo Observatory facility operated by the National Astronomy and Ionosphere Center and administered by Cornell University.
The decreasing operating budget for the observatory has created a shortfall of funds which has not been made up from other sources such as private donors, NASA, foreign research institutions, or private non-profit organizations such as SETI@home. The National Science Foundation has made it clear that Arecibo will close in 2011 without such funds, in which case the present data stream for SETI@home would cease.
However, in the overall longterm views held by many involved with the SETI project, any usable radio telescope could take over from Arecibo, as all the SETI systems are portable and relocatable.
When the project was launched there were few alternative ways of donating computer time to research projects. However, there are now many other projects that are competing for such time.
In one documented case, an individual was fired for installing and running the SETI@home software on computers operated by the U.S. state of Ohio.[25] In another incident, a school IT director resigned after his installation reportedly cost his school district $1 million in removal costs; police investigated the incident.[26]
As of October 16, 2005, approximately one third of the processing for the non-BOINC version of the software was performed on work- or school-based machines.[27] As many of these computers give reduced privileges to ordinary users, it is possible that much of this processing was done by network administrators.
To some extent, this may be offset by better connectivity to home machines and increasing performance of home computers.
There is currently no government funding for SETI research, and private funding is always limited. Berkeley Space Science Lab has found ways of working with small budgets and the project has received donations allowing it to go well beyond its original planned duration, but it still has to compete for limited funds with other SETI projects and other space sciences projects.
In a December 16, 2007 plea for donations, SETI@home described its modest financial state and said that $476,000 was needed to continue operations into 2008.
A number of individuals and companies made unofficial changes to the distributed part of the software to try to produce faster results, but this compromised the integrity of the results.[28] As a consequence, the software had to be updated to make such changes easier to detect, and to identify unreliable clients. BOINC will run unofficial clients; however, clients that return different, and therefore incorrect, data are not allowed, so corruption of the result database is avoided. BOINC relies on cross-checking to validate data,[29] but unreliable clients still need to be identified, to avoid situations where two of them report the same invalid data and thereby corrupt the database. A very popular unofficial client allows users to take advantage of special features provided by their processor(s), such as SSE, SSE2, SSE3, SSSE3, and SSE4.1, for faster processing. The downside is that if users select features their processor(s) do not support, the chances of bad results and crashes rise significantly. Tools are freely available to tell users which features their processor(s) support.
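A check of this kind can be as simple as reading the flags the operating system already reports. The following is a minimal, Linux-only sketch (it is not one of the actual community tools): it reads the CPU flag list that the kernel exposes in /proc/cpuinfo and reports which of the SIMD extensions named above are present.

```python
def simd_features(cpuinfo_path="/proc/cpuinfo"):
    """Return the sorted subset of the SSE-family extensions that the
    first CPU listed in a Linux /proc/cpuinfo file advertises.
    Linux-specific; the path is parameterized so it can be tested."""
    wanted = {"sse", "sse2", "sse3", "ssse3", "sse4_1"}
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return sorted(wanted & flags)
    return []  # no flags line found
```

A user would compare this list against the feature set an optimized executable requires before selecting it, avoiding the crashes described above.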
Currently SETI@home is a test bed for further development not only of BOINC but of other hardware and software (database) technology. Under SETI processing loads, these experimental technologies can be more bleeding edge than leading edge. SETI databases do not hold typical accounting or business data or relational structures, and such non-traditional database uses often incur greater processing overheads and a higher risk of database corruption and outright failure. Hardware, software, and database failures can (and do) cause dips in project participation.
The project has had to shut down several times to change over to new databases capable of handling larger datasets. Hardware failure has proven to be a substantial source of project shutdowns, as hardware failure is often coupled with database corruption.